We propose a parameterization of nonlinear dynamic controllers based on the recurrent equilibrium network, a generalization of the recurrent neural network. The parameterization is constrained so that the resulting controller guarantees exponential stability of a partially observed dynamical system. Finally, we present a method for synthesizing this controller using a projected policy gradient method to maximize a reward function with arbitrary structure, where the projection step involves the solution of convex optimization problems. We demonstrate the proposed method with simulated control of nonlinear plants, including plants modeled with neural networks.
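The projected policy gradient scheme above can be sketched minimally as follows. This is an illustrative sketch only, assuming a toy two-parameter policy and a hypothetical reward: Euclidean projection onto a norm ball stands in for the paper's stability-constrained convex parameter set (here the projection has a closed form, whereas the paper's projection step solves a general convex program).

```python
import math

def project_onto_ball(theta, radius):
    """Euclidean projection onto {x : ||x|| <= radius} -- the closed-form
    solution of a simple convex program, standing in for projection onto
    a stability-constrained parameter set."""
    norm = math.sqrt(sum(t * t for t in theta))
    if norm <= radius:
        return list(theta)
    return [t * radius / norm for t in theta]

def projected_policy_gradient(reward_grad, theta, radius, lr=0.1, steps=50):
    """Gradient ascent on the reward, followed by a projection each step."""
    for _ in range(steps):
        g = reward_grad(theta)
        theta = [t + lr * gi for t, gi in zip(theta, g)]
        theta = project_onto_ball(theta, radius)
    return theta

# Hypothetical toy reward R(theta) = -||theta - target||^2, with the target
# outside the unit ball so the constrained optimum lies on the boundary.
target = [3.0, 4.0]

def reward_grad(theta):
    # Ascent direction of R: 2 * (target - theta)
    return [2.0 * (g - t) for t, g in zip(theta, target)]
```

With this toy reward, the iterates converge to the projection of `target` onto the unit ball, i.e. `target / ||target||`.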
translated by Google Translate
Many existing region-of-attraction (ROA) analysis tools struggle to address feedback systems with large-scale neural network (NN) policies and/or high-dimensional sensing modalities such as cameras. In this paper, we tailor the projected gradient descent (PGD) attack method, developed in the adversarial learning community, as a general-purpose ROA analysis tool for large-scale nonlinear systems and end-to-end perception-based control. We show that ROA analysis can be approximated as a constrained maximization problem whose goal is to find the worst-case initial condition, i.e. the one that drives the closed-loop state furthest from the equilibrium. We then present two PGD-based iterative methods that can be used to solve the resulting constrained maximization problem. Our analysis is not based on Lyapunov theory and therefore requires minimal information about the problem structure. In the model-based setting, we show that the PGD updates can be performed efficiently using backpropagation. In the model-free setting, which is more relevant for ROA analysis of perception-based control, we propose a finite-difference PGD estimate that is general and only requires a black-box simulator producing trajectories of the closed-loop system from any given initial state. We demonstrate the scalability and generality of our analysis tool on several numerical examples with large-scale NN policies and high-dimensional image observations. We believe our proposed analysis is a meaningful initial step toward further understanding the closed-loop stability of large-scale nonlinear systems and perception-based control.
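The model-free, finite-difference variant can be sketched in a few lines. This is a minimal sketch under stated assumptions: the "black-box simulator" is a hypothetical one-dimensional system with a known ROA of (-1, 1) around the origin, and the finite-difference PGD ascends the terminal deviation over a candidate set of initial conditions, just as the abstract describes.

```python
def simulate(x0, steps=20, dt=0.05):
    """Hypothetical black-box closed-loop simulator (a stand-in for the
    paper's NN-in-the-loop systems): Euler steps of
    xdot = -x (1 - x^2) (4 - x^2), whose origin has ROA (-1, 1) and
    which settles near +/-2 for initial states outside that interval."""
    x = x0
    for _ in range(steps):
        x = x + dt * (-x * (1.0 - x * x) * (4.0 - x * x))
    return x

def objective(x0):
    # Deviation of the terminal state from the equilibrium at the origin.
    return abs(simulate(x0))

def fd_pgd_worst_case(x0, radius, lr=0.05, eps=1e-4, iters=200):
    """Finite-difference PGD: ascend the terminal deviation using only
    simulator queries, projecting onto the candidate set |x0| <= radius."""
    for _ in range(iters):
        g = (objective(x0 + eps) - objective(x0 - eps)) / (2.0 * eps)
        x0 = min(radius, max(-radius, x0 + lr * g))  # gradient step + projection
    return x0
```

Starting from an initial condition inside the ROA, the iteration pushes the candidate across the ROA boundary, certifying (by counterexample) that the candidate set is not contained in the ROA.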
Motivated by the recent empirical success of policy-based reinforcement learning (RL), there is a research trend studying the performance of policy-based RL methods on standard control benchmark problems. In this paper, we examine the effectiveness of policy-based RL methods on an important robust control problem, namely $\mu$ synthesis. We build a connection between robust adversarial RL and $\mu$ synthesis, and develop a model-free version of the well-known $DK$-iteration for solving state-feedback $\mu$ synthesis with static $D$-scaling. In the proposed algorithm, the $K$ step mimics the classical central path algorithm by incorporating a recently developed double-loop adversarial RL method as a subroutine, and the $D$ step is based on model-free finite-difference approximation. An extensive numerical study is also presented to demonstrate the utility of our proposed model-free algorithm. Our study sheds new light on the connections between adversarial RL and robust control.
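The alternating $DK$ structure can be sketched on a toy problem. A minimal sketch, assuming a smooth scalar surrogate cost `J(k, d)` as a stand-in for the scaled closed-loop performance; in the paper, the $K$ step is a double-loop adversarial-RL subroutine and the $D$ step applies finite differences to the true cost, whereas here both steps use finite differences on the surrogate.

```python
def fd_grad(f, x, eps=1e-5):
    """Central finite-difference estimate of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

def dk_iteration(J, k, d, lr=0.1, inner=200, outer=10):
    """Alternate the two steps of a model-free DK iteration:
    improve the controller k with the scaling d fixed, then
    improve the scaling d with the controller k fixed."""
    for _ in range(outer):
        for _ in range(inner):  # K step (RL subroutine in the paper)
            k -= lr * fd_grad(lambda kk: J(kk, d), k)
        for _ in range(inner):  # D step (finite differences in the paper)
            d -= lr * fd_grad(lambda dd: J(k, dd), d)
    return k, d

# Hypothetical surrogate cost: jointly convex, minimized at k = d = 2
# with optimal value 1.
J = lambda k, d: (k - d) ** 2 + (d - 2.0) ** 2 + 1.0
```

On this surrogate, the alternation converges linearly to the joint minimizer, mirroring how each step of the real algorithm monotonically improves the scaled performance.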
Adversarial examples can easily degrade the classification performance of neural networks. Empirical methods for promoting robustness to such examples have been proposed, but often lack analytical insights and formal guarantees. Recently, some robustness certificates based on system-theoretic notions have appeared in the literature. This work proposes an incremental dissipativity-based robustness certificate for neural networks, in the form of a linear matrix inequality for each layer. We also propose an equivalent spectral criterion for this certificate which is scalable to neural networks with multiple layers. We demonstrate the improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
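For intuition about layerwise spectral criteria, consider the classical (and generally looser) relative of such certificates: the product of per-layer spectral norms upper-bounds the Lipschitz constant of a feed-forward network with 1-Lipschitz activations such as ReLU. The sketch below is this textbook bound, not the paper's dissipativity certificate, with spectral norms computed by power iteration.

```python
import math
import random

def spectral_norm(W, iters=100):
    """Largest singular value of a matrix W (given as a list of rows),
    computed by power iteration on W^T W."""
    m, n = len(W), len(W[0])
    v = [random.random() + 0.1 for _ in range(n)]  # positive init
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]  # u = W v
        v = [sum(W[i][j] * u[i] for i in range(m)) for j in range(n)]  # v = W^T u
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in u))

def lipschitz_upper_bound(layers):
    """Product of layer spectral norms: a certified (usually loose)
    Lipschitz constant for a feed-forward net with 1-Lipschitz activations."""
    bound = 1.0
    for W in layers:
        bound *= spectral_norm(W)
    return bound
```

A smaller certified Lipschitz constant directly limits how much an input perturbation can move the network's output, which is the mechanism such certificates exploit.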
Neural network controllers have become popular in control tasks thanks to their flexibility and expressivity. Stability is a crucial property for safety-critical dynamical systems, and in many cases stabilization of partially observed systems requires controllers that retain and process long-term memories of the past. We consider the important class of recurrent neural networks (RNN) as dynamic controllers for nonlinear uncertain partially observed systems, and derive convex stability conditions based on integral quadratic constraints, the S-lemma, and sequential convexification. To ensure stability during the learning and control process, we propose a projected policy gradient method that iteratively enforces the stability conditions in a reparametrized space, using mild additional information about the system dynamics. Numerical experiments show that our method learns stabilizing controllers with fewer samples and achieves higher final performance compared with policy gradient.
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates, while objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
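The triple-to-question step described above can be sketched as follows. The question templates here are purely illustrative assumptions; PIE-QG's actual question formation from OpenIE output may differ.

```python
def triples_to_qa(triples):
    """Turn OpenIE-style (subject, predicate, object) triples into
    synthetic extractive QA pairs, as the abstract describes:
    question the subject slot (answer = subject) and the object slot
    (answer = object). Templates are hypothetical placeholders."""
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"What {pred} {obj}?", subj))   # subject is the answer
        pairs.append((f"{subj} {pred} what?", obj))   # object is the answer
    return pairs
```

Each passage thus yields several QA pairs with no human labeling, which is what allows the downstream QA model to be trained on far fewer documents.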
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
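Penalized JML estimation can be illustrated on the simplest unidimensional special case of this model family, the Rasch (1PL) model, where P(correct) = sigmoid(ability - difficulty). The sketch below is an assumption-laden toy (scalar abilities, L2 penalty, plain gradient ascent on sparse response triples), not the paper's full multidimensional estimator.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def penalized_jml(responses, n_students, n_items, lam=0.1, lr=0.5, epochs=200):
    """Penalized joint maximum likelihood for a Rasch (1PL) model:
    P(y = 1) = sigmoid(theta_student - b_item).
    `responses` is a sparse list of (student, item, 0/1) observations,
    so only observed entries are touched -- the property that makes
    batched JML workable on large sparse data."""
    theta = [0.0] * n_students   # student abilities
    b = [0.0] * n_items          # item difficulties
    for _ in range(epochs):
        g_t = [-lam * t for t in theta]   # L2 penalty gradients
        g_b = [-lam * d for d in b]
        for s, i, y in responses:         # log-likelihood gradients
            p = sigmoid(theta[s] - b[i])
            g_t[s] += (y - p)
            g_b[i] -= (y - p)
        theta = [t + lr * g for t, g in zip(theta, g_t)]
        b = [d + lr * g for d, g in zip(b, g_b)]
    return theta, b
```

The penalty keeps the jointly estimated person and item parameters bounded, which is the standard remedy for the divergence JML exhibits on perfect response patterns.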
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
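The three components named above (CS, DS, LT) fit in a few lines. A toy scalar sketch under stated assumptions: each client holds a list of scalar targets with squared loss, and one round samples clients, runs several local gradient-type steps on sampled minibatches, and averages the local models. This illustrates plain FedAvg's structure only, not ProxSkip or the new method.

```python
import random

def fedavg_round(client_data, model, n_sampled, local_steps, lr, batch):
    """One round of a FedAvg-style method on scalar least-squares clients,
    showing its three components: client sampling (CS), data sampling (DS)
    and local training (LT). Local loss for client c is
    mean over its targets y of (model - y)^2 / 2."""
    sampled = random.sample(list(client_data), n_sampled)        # CS
    local_models = []
    for c in sampled:
        x = model
        for _ in range(local_steps):                             # LT
            minibatch = random.sample(client_data[c], batch)     # DS
            grad = sum(x - y for y in minibatch) / batch
            x -= lr * grad
        local_models.append(x)
    return sum(local_models) / len(local_models)  # server averages
```

With heterogeneous clients (different targets), repeated rounds drive the server model toward the average of the client optima, and the number of rounds needed is exactly the communication complexity the generations of LT theory try to bound.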
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently, and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets from different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.